Implemented the possibility to load predictions from details files and continue evaluating from there #488
Conversation
This is very nice, thanks! We'll still have to add a saving step at the end of each evaluation type, for example saving at the end of loglikelihood evals or generative ones. (We should be able to safely assume that once the first item of a given type's batch runs, the rest will run fine too, since they are sorted by inverse length.)
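A minimal sketch of the per-type checkpointing being suggested here, assuming requests can be grouped by their request type and persisted as soon as one type's batch finishes. The names `save_details_partial`, `model.run`, and `req.request_type` are illustrative, not lighteval's actual API.

```python
# Sketch: save details as soon as each request type finishes,
# so a crash in a later batch does not lose earlier results.
from collections import defaultdict


def evaluate_with_checkpoints(model, requests, save_details_partial):
    """Run each request type as one batch and persist its results
    before moving on to the next type (hypothetical helper names)."""
    by_type = defaultdict(list)
    for req in requests:
        by_type[req.request_type].append(req)

    all_results = {}
    for request_type, batch in by_type.items():
        # If the first item of a type's batch runs, the rest
        # (sorted by inverse length) should run fine as well.
        results = model.run(request_type, batch)
        all_results[request_type] = results
        # Checkpoint this evaluation type before starting the next one.
        save_details_partial(request_type, results)
    return all_results
```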
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Sorry, I am not sure I fully follow. Please let me know what changes are necessary.
LGTM
…d continue evaluating from there (#488)
* Implemented the possibility to load predictions from details files and continue evaluating from there.
* Run the model as a fallback when no details can be loaded.
* Improved loading speed and added more useful error messages.
* Fixed typo.
* Fixed gnarly bug with details loading to prevent loading too many examples.
* Unpacked predictions to fix an issue with oddly saved predictions.
* Made bulk loading easier by also allowing the first timestamp more generally.
* Made loading details more robust against tensors being saved in the details files.
Fixes #467.
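A minimal sketch of the resume-from-details idea described in the commit list above, assuming details are stored as parquet files discovered by timestamped filename. The directory layout, file pattern, and column name `predictions` are assumptions for illustration, not the exact lighteval format.

```python
# Sketch: reuse cached predictions from details files when available,
# otherwise fall back to running the model.
import glob
import os

import pandas as pd


def load_cached_predictions(details_dir: str, task_name: str):
    """Return cached predictions for a task, or None if nothing usable exists."""
    pattern = os.path.join(details_dir, "**", f"details_{task_name}_*.parquet")
    files = sorted(glob.glob(pattern, recursive=True))
    if not files:
        return None  # no details to load; caller runs the model instead
    # Use the first timestamp so one bulk run can be resumed consistently.
    df = pd.read_parquet(files[0])
    predictions = df["predictions"].tolist()
    # Details files may contain tensors; coerce them to plain Python values.
    return [p.tolist() if hasattr(p, "tolist") else p for p in predictions]


def evaluate(model, task_name, docs, details_dir):
    cached = load_cached_predictions(details_dir, task_name)
    if cached is not None and len(cached) == len(docs):
        return cached  # continue evaluating from the saved details
    return model.generate(docs)  # fallback when no details can be loaded
```

The length check guards against loading a partial details file for a task whose document set has changed, which is in the spirit of the "prevent loading too many examples" fix above.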